Real-time scheduling


What are the options for real-time scheduling?

A number of scheduling concepts have been developed for implementation in a real-time operating system (RTOS). The most commonly encountered is the pre-emptive scheduler, even though it is not inherently a real-time algorithm – in contrast to, for example, deadline scheduling, which aims to ensure that critical threads are executed within a given timeframe.

Desktop operating systems are designed around the concept of fairness – that no application should be starved of processing cycles by another. These systems tend to use round-robin scheduling, in which each task will run for a set period of time before being forced to yield access to the processor so that execution can switch to a different task that is ready to run. Once all tasks that are not blocked from running have been allotted a timeslice, execution resumes with the first task and the cycle continues.
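A minimal conceptual sketch of that cycle in C, with a toy task set and a fixed quantum (a real kernel would force the switch from a timer interrupt rather than looping like this):

/* Illustrative round-robin pass: each ready task gets the same quantum,
 * blocked tasks are skipped, and the cycle then repeats from the start. */
#include <stdbool.h>
#include <stdio.h>

#define QUANTUM_MS 10
#define NUM_TASKS  3

struct task {
    const char *name;
    bool        ready;      /* blocked tasks do not receive a timeslice */
};

int main(void)
{
    struct task tasks[NUM_TASKS] = {
        { "editor", true }, { "browser", true }, { "indexer", false },
    };

    for (int pass = 0; pass < 2; pass++) {          /* two scheduling cycles */
        for (int i = 0; i < NUM_TASKS; i++) {
            if (!tasks[i].ready)
                continue;
            printf("%s runs for %d ms, then yields\n",
                   tasks[i].name, QUANTUM_MS);
        }
    }
    return 0;
}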

In a real-time system, it is generally acceptable to starve less important tasks of processor cycles if there are critical tasks with work to do – although determining how ‘unimportant’ a task really is can be problematic for guaranteeing overall system stability.

How does the typical scheduler operate?

Conceptually, the simplest possible scheduler is the main() loop – it simply cycles through a series of functions. As long as the critical functions execute within the maximum allowable processing latency of the system, the loop will provide satisfactory performance. However, every logical task within the system is given the same execution priority and will consume processor cycles even if it has no work to do. It becomes very difficult to guarantee that the loop will finish execution within the maximum allowable latency for all situations. Applications also become difficult to maintain beyond a certain size. At this point, it makes sense to break the application down into discrete tasks and use an RTOS scheduler to control their execution.
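A minimal sketch of such a super-loop, with purely illustrative task functions:

/* Placeholder task bodies – in a real system each would do I/O or control
 * work. Every function must return quickly enough that one full pass of the
 * loop stays within the system's maximum allowable processing latency. */
static void poll_sensors(void)       { /* read inputs */ }
static void update_control_law(void) { /* compute outputs */ }
static void refresh_display(void)    { /* low-importance housekeeping */ }

int main(void)
{
    for (;;) {
        /* Every 'task' runs at the same effective priority and consumes
         * cycles on every pass, whether or not it has work to do. */
        poll_sensors();
        update_control_law();
        refresh_display();
    }
}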

A pre-emptive RTOS works on the basis that the task with the highest priority and which is ready to run will be the one that is scheduled for execution. Typically, the RTOS will examine the list of tasks after any change of task status – usually after a system call or an interrupt. For example, a task may relinquish control of a mutual-exclusion semaphore (mutex) on which a higher-priority task is blocked. The RTOS will note that the high-priority task is now ready to run and pick it for scheduling. That task will continue execution until it is replaced by a higher-priority task, yields the processor, or becomes blocked again. Because the task can remain running, it is possible that it could starve other tasks of execution time – a risk that system designers need to take into account. Conversely, the RTOS guarantees that the most critical thread that is ready to run will be able to access the processor as soon as it requires it.
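The pattern can be illustrated with a FreeRTOS-style sketch (the task names, priorities and the shared 'bus' are illustrative): the high-priority task blocks on the mutex and pre-empts the low-priority task the moment it is released.

#include "FreeRTOS.h"
#include "task.h"
#include "semphr.h"

static SemaphoreHandle_t xBusMutex;

static void vHighPriorityTask(void *pvParameters)
{
    (void)pvParameters;
    for (;;) {
        /* Blocks here until the mutex becomes available. */
        xSemaphoreTake(xBusMutex, portMAX_DELAY);
        /* ... critical work on the shared bus ... */
        xSemaphoreGive(xBusMutex);
        vTaskDelay(pdMS_TO_TICKS(10));
    }
}

static void vLowPriorityTask(void *pvParameters)
{
    (void)pvParameters;
    for (;;) {
        xSemaphoreTake(xBusMutex, portMAX_DELAY);
        /* ... background work ... */
        xSemaphoreGive(xBusMutex);  /* scheduler switches to the high-priority
                                       task as soon as it becomes ready */
    }
}

int main(void)
{
    xBusMutex = xSemaphoreCreateMutex();
    xTaskCreate(vLowPriorityTask,  "low",  configMINIMAL_STACK_SIZE, NULL, 1, NULL);
    xTaskCreate(vHighPriorityTask, "high", configMINIMAL_STACK_SIZE, NULL, 3, NULL);
    vTaskStartScheduler();          /* never returns if startup succeeds */
    for (;;);
}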

What are the common pitfalls in scheduling?

In principle, it is possible to analyze a system for potential scheduling problems and to ensure that the system will meet its deadlines. However, the analysis is greatly complicated by any interprocessor communication. Basic rate-monotonic analysis, one of the earlier theories used for determining schedulability – and the subject of one of the 20 most commonly cited papers in computer science – can only guarantee schedulability for tasks that do not share resources. In practice, most systems demand shared access to memory objects and peripherals, which makes schedulability, as well as the tendency to deadlock, difficult to predict.
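For independent periodic tasks with deadlines equal to their periods, that classic test reduces to comparing total processor utilization against the bound n(2^(1/n) − 1). A sketch, with illustrative figures:

#include <math.h>
#include <stdbool.h>
#include <stdio.h>

/* Basic rate-monotonic test for independent periodic tasks (no shared
 * resources): the set is schedulable under rate-monotonic priorities if
 *   sum(Ci / Ti) <= n * (2^(1/n) - 1). */
struct task { double wcet; double period; };   /* Ci and Ti, same time units */

static bool rm_schedulable(const struct task *tasks, int n)
{
    double u = 0.0;
    for (int i = 0; i < n; i++)
        u += tasks[i].wcet / tasks[i].period;
    double bound = n * (pow(2.0, 1.0 / n) - 1.0);
    printf("utilization %.3f, bound %.3f\n", u, bound);
    return u <= bound;   /* sufficient, not necessary: failing the test does
                            not prove the set is unschedulable */
}

int main(void)
{
    /* Illustrative figures only. */
    struct task set[] = { { 1.0, 4.0 }, { 2.0, 8.0 }, { 1.0, 10.0 } };
    return rm_schedulable(set, 3) ? 0 : 1;
}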

One problem encountered with conventional pre-emptive RTOS schedulers is that of priority inversion. In this situation, a low-priority task obtains access to a shared resource but is then pre-empted by higher-priority tasks; because it still holds the resource, every other task that needs it is blocked. If a critical task requires that resource, it cannot run until the low-priority task has released the mutex. But until activity has subsided far enough for the low-priority task to run, it cannot make enough progress to release the mutex. During this time, the effective priority of the critical task is reduced to that of the low-priority thread: hence priority inversion.

One workaround, although it can introduce other schedulability problems if implemented without safeguards, is to use the priority-inheritance protocol. Under this protocol, a thread that owns a mutex temporarily runs at the priority of the most important task blocked on it, until the mutex is released.
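In FreeRTOS, for example, priority inheritance is applied to mutexes created with xSemaphoreCreateMutex() but not to plain binary semaphores – a distinction worth showing, with the scenario itself left illustrative:

/* Only the mutex applies priority inheritance: while a low-priority task
 * holds it and a higher-priority task blocks on it, the holder temporarily
 * runs at the blocked task's priority, so a medium-priority task cannot
 * keep it off the processor. */
#include "FreeRTOS.h"
#include "semphr.h"

SemaphoreHandle_t xNoInheritance;   /* binary semaphore: inversion possible */
SemaphoreHandle_t xWithInheritance; /* mutex: priority inheritance applied  */

void vInitSyncObjects(void)
{
    xNoInheritance   = xSemaphoreCreateBinary();
    xSemaphoreGive(xNoInheritance);           /* make it available initially */
    xWithInheritance = xSemaphoreCreateMutex();
}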

Many RTOS implementations support priority inheritance or a close relative of the technique, the priority ceiling protocol, which prevents a low-priority task from being elevated to the highest possible priority in the system. There are dangers in using the protocol: designers need to ensure that a normally low-priority task will not simply hog a resource and keep running indefinitely in a state in which it cannot easily be pre-empted.
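An immediate-ceiling variant can also be approximated in application code. The following is an illustrative, single-core sketch built on FreeRTOS priority calls rather than a built-in RTOS facility; the ceiling value and names are assumptions:

/* Any task taking the lock is first raised to the resource's ceiling
 * priority – the priority of the highest task that ever uses it – and
 * restored on release. Because the holder runs at the ceiling, no task that
 * shares the resource can pre-empt it and then block on the mutex.
 * Assumes a single core, where only one sharer can be inside the lock. */
#include "FreeRTOS.h"
#include "task.h"
#include "semphr.h"

#define RESOURCE_CEILING_PRIORITY  4   /* highest priority of any user task */

static SemaphoreHandle_t xResourceMutex;
static UBaseType_t uxSavedPriority;

void vResourceInit(void)
{
    xResourceMutex = xSemaphoreCreateMutex();
}

void vResourceLock(void)
{
    uxSavedPriority = uxTaskPriorityGet(NULL);       /* our own priority */
    vTaskPrioritySet(NULL, RESOURCE_CEILING_PRIORITY);
    xSemaphoreTake(xResourceMutex, portMAX_DELAY);
}

void vResourceUnlock(void)
{
    xSemaphoreGive(xResourceMutex);
    vTaskPrioritySet(NULL, uxSavedPriority);         /* drop back down */
}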

There are also subtleties in implementation. If an application that relies on the priority-ceiling protocol is moved from a single-core to a dual-core processor, the protocol alone can no longer guarantee mutual exclusion, so a distributed priority-ceiling protocol has to be used instead.

Because of the problems of analyzing schedulability in asynchronous, interrupt-driven real-time systems, many systems that have to guarantee dependable behaviour resort to some form of strict time-sharing. In this scenario, important tasks are guaranteed a number of cycles within a given period in which to run, even if they have nothing to do, just in case they need to respond to a problem. ARINC 653 avionics systems have used this approach for years and a number of automotive systems have adopted the FlexRay architecture, which is based on a similar time-triggered approach.
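Conceptually, such a schedule is a static table of fixed windows inside a repeating major frame. The sketch below is a hypothetical C representation, not real ARINC 653 configuration syntax; the partition names and timings are illustrative:

/* Hypothetical representation of a fixed major-frame schedule. Each
 * partition is granted its window in every frame, whether or not it has
 * work to do. */
#include <stdint.h>

struct partition_window {
    const char *partition;      /* logical partition name               */
    uint32_t    offset_ms;      /* start offset within the major frame  */
    uint32_t    duration_ms;    /* guaranteed, fixed execution window   */
};

/* 100 ms major frame, repeated indefinitely. */
static const struct partition_window major_frame[] = {
    { "flight_control", 0,  40 },   /* critical: largest guaranteed slot */
    { "navigation",     40, 30 },
    { "maintenance",    70, 30 },   /* runs even when idle */
};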

Each partition in an ARINC 653 system has its own dedicated, protected memory space and each partition can run a multitasking system. Vital functions usually have dedicated partitions. Even with such rigidly enforced partitions, timing problems can still arise through interactions with hardware. One problem that has been identified in a paper by GE Aviation and Wind River Systems lies in the use of direct memory access (DMA). If a partition towards the end of its time-slice decides to initiate a long DMA transfer, the partition that runs immediately afterwards can stall because the DMA hardware has exclusive access to the memory bus – effectively shortening the new partition’s timeslice and creating the potential for it to miss its own deadline.

The recommendation in this case is to transfer the responsibility for setting up DMA transfers to a system-level task that takes into account the amount of time a partition has remaining before it is forced to relinquish the processor.
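A hypothetical sketch of that arrangement – every function name here is illustrative and would map onto platform-specific services – in which partitions request transfers and a system-level service starts them only if they can complete within the remaining window:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical platform services assumed by this sketch. */
uint32_t partition_time_remaining_ms(void);
uint32_t dma_estimated_duration_ms(uint32_t bytes);
void     dma_start(const void *src, void *dst, uint32_t bytes);
void     dma_defer(const void *src, void *dst, uint32_t bytes);

bool system_dma_request(const void *src, void *dst, uint32_t bytes)
{
    /* Only start the transfer if it will finish inside the current window;
     * otherwise queue it for the partition's next slot so that the following
     * partition's timeslice is not eaten by bus contention. */
    if (dma_estimated_duration_ms(bytes) < partition_time_remaining_ms()) {
        dma_start(src, dst, bytes);
        return true;
    }
    dma_defer(src, dst, bytes);
    return false;
}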

Similarly, interrupt handling can upset the operation of an otherwise strictly time-sliced system. A number of systems prevent all but the system timer, which is used to help manage scheduling, from being able to assert an interrupt. Others may record the interrupt and then allow the affected system to poll for the associated data when it next runs.
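The record-and-poll approach can be sketched as follows (the device and names are illustrative): the handler only latches the event, and the owning task processes it within its own budget when it next runs.

#include <stdbool.h>
#include <stdint.h>

static volatile bool    uart_event_pending;
static volatile uint8_t uart_last_byte;

/* Installed as the device's interrupt handler (platform-specific hookup). */
void uart_isr(uint8_t byte_from_hardware)
{
    uart_last_byte     = byte_from_hardware;
    uart_event_pending = true;           /* record only – no processing here */
}

/* Called from the owning task each time it is scheduled. */
void uart_task_poll(void)
{
    if (uart_event_pending) {
        uart_event_pending = false;
        /* ... process uart_last_byte within this task's own time budget ... */
    }
}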
